Estimating divergence functionals and the likelihood ratio by penalized convex risk minimization
Authors
Abstract
We develop and analyze an algorithm for nonparametric estimation of divergence functionals and the density ratio of two probability distributions. Our method is based on a variational characterization of f-divergences, which turns the estimation into a penalized convex risk minimization problem. We present a derivation of our kernel-based estimation algorithm and an analysis of convergence rates for the estimator. Our simulation results demonstrate the convergence behavior of the method, which compares favorably with existing methods in the literature.
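As a concrete illustration of the idea in the abstract: the variational characterization of an f-divergence, D_f(P||Q) = sup_g E_P[g(X)] - E_Q[f*(g(Y))] with f* the convex conjugate of f, turns divergence estimation into maximizing a penalized empirical objective over a function class. The sketch below instantiates this for the KL divergence (f(t) = t log t, f*(v) = exp(v-1)) with a Gaussian-kernel expansion and plain gradient ascent. The kernel bandwidth, penalty weight, step size, and iteration count are illustrative assumptions, not the paper's algorithm or tuning.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 100, 100
xs = rng.normal(0.0, 1.0, size=(n, 1))   # sample from P = N(0, 1)
ys = rng.normal(0.5, 1.0, size=(m, 1))   # sample from Q = N(0.5, 1)

centers = np.vstack([xs, ys])            # kernel centers for the expansion of g
sigma, lam, lr, steps = 1.0, 0.1, 0.005, 2000  # illustrative hyperparameters

def gram(a, b):
    """Gaussian kernel matrix K[i, k] = exp(-|a_i - b_k|^2 / (2 sigma^2))."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

Kx = gram(xs, centers)                   # evaluates g on the P-sample
Ky = gram(ys, centers)                   # evaluates g on the Q-sample
Kc = gram(centers, centers)              # for the quadratic (RKHS-norm) penalty
alpha = np.zeros(centers.shape[0])

# Gradient ascent on the penalized variational objective
#   J(a) = (1/n) sum g(x_i) - (1/m) sum exp(g(y_j) - 1) - lam * a' Kc a,
# where g(.) = sum_k alpha_k K(., z_k).
for _ in range(steps):
    gy = Ky @ alpha
    grad = (Kx.mean(0)
            - (np.exp(gy - 1)[:, None] * Ky).mean(0)
            - 2 * lam * (Kc @ alpha))
    alpha += lr * grad

# The unpenalized objective value is the divergence estimate.
kl_hat = (Kx @ alpha).mean() - np.exp(Ky @ alpha - 1).mean()
print(kl_hat)
```

At the optimum the witness is g = 1 + log(dP/dQ), so the unpenalized objective value estimates the KL divergence; the penalty biases the estimate downward but stabilizes it, which is the trade-off the paper's convergence analysis quantifies.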
Similar references
Penalized Bregman Divergence Estimation via Coordinate Descent
Variable selection via penalized estimation is appealing for dimension reduction. For penalized linear regression, Efron et al. (2004) introduced the LARS algorithm. More recently, Friedman et al. (2007) developed the coordinate descent (CD) algorithm for penalized linear and penalized logistic regression and showed it to be computationally superior. This paper explores...
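The coordinate descent approach mentioned in this snippet cycles through the coordinates, solving each one-dimensional penalized subproblem in closed form; for the lasso the update is a soft-thresholding step. A minimal sketch under illustrative assumptions (synthetic data, a fixed penalty level, and a fixed sweep count; not Friedman et al.'s implementation):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 50, 10
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:3] = [2.0, -1.5, 1.0]         # sparse ground truth
y = X @ beta_true + 0.1 * rng.normal(size=n)

lam = 0.1                                 # illustrative penalty level

def soft_threshold(z, t):
    """Closed-form solution of the scalar lasso subproblem."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

# Coordinate descent for (1/(2n))||y - X b||^2 + lam * ||b||_1.
beta = np.zeros(p)
col_sq = (X ** 2).sum(0)
for _ in range(100):                      # full sweeps over the coordinates
    for j in range(p):
        # Partial residual with coordinate j removed from the fit.
        r_j = y - X @ beta + X[:, j] * beta[j]
        beta[j] = soft_threshold(X[:, j] @ r_j, n * lam) / col_sq[j]

print(beta)
```

Each coordinate update touches only one column of X, which is what makes the method scale well on sparse high-dimensional problems.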
Statistical models, likelihood, penalized likelihood and hierarchical likelihood (Aug 2008)
We give an overview of statistical models and likelihood, together with two of its variants: penalized and hierarchical likelihood. The Kullback-Leibler divergence is referred to repeatedly: for defining the misspecification risk of a model, and for grounding the likelihood and the likelihood cross-validation that can be used for choosing weights in penalized likelihood. Families of penalized likel...
Nonparametric estimating equations based on a penalized information criterion
It has recently been observed that, given the mean-variance relation, one can improve on the accuracy of the quasi-likelihood estimator with an adaptive estimator based on estimates of the higher moments. The estimation of such moments is usually unstable, however, so the improvement becomes evident only for large samples. The author proposes a nonparametric estimating equ...
Convergence Analysis of Generalized Iteratively Reweighted Least Squares Algorithms on Convex Function Spaces
The computation of robust regression estimates often relies on minimizing a convex functional over a convex set. In this paper we discuss a general technique, applicable to a large class of convex functionals, for computing the minimizers iteratively; it is closely related to majorization-minimization algorithms. Our approach is based on a quadratic approximation of the functional to be minimized and inc...
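The quadratic-approximation idea in this snippet is what underlies iteratively reweighted least squares (IRLS): at each step a quadratic majorizer of the robust loss is minimized, which amounts to solving a weighted least-squares problem with weights derived from the current residuals. A minimal sketch for the Huber loss, one standard member of this class; the data, tuning constant, and stopping rule are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 60
x = rng.uniform(-2, 2, n)
X = np.column_stack([np.ones(n), x])      # design: intercept + slope
y = 1.0 + 2.0 * x + 0.1 * rng.normal(size=n)
y[:5] += 8.0                              # gross outliers in the response

c = 1.345                                 # common Huber tuning constant
beta = np.linalg.lstsq(X, y, rcond=None)[0]  # OLS start (outlier-corrupted)

for _ in range(50):
    r = y - X @ beta
    # Huber weights w(r) = min(1, c/|r|): small residuals get full weight,
    # large residuals are downweighted (this is psi(r)/r for the Huber loss).
    w = np.minimum(1.0, c / np.maximum(np.abs(r), 1e-12))
    # Weighted least-squares step: minimize the quadratic majorizer.
    beta_new = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
    if np.max(np.abs(beta_new - beta)) < 1e-8:
        beta = beta_new
        break
    beta = beta_new

print(beta)
```

Despite 5 of 60 responses being shifted by +8, the reweighting pulls the fit back toward the true coefficients (1, 2), whereas the OLS start is visibly corrupted.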
Optimization Methods for Sparse Pseudo-Likelihood Graphical Model Selection
Sparse high-dimensional graphical model selection is a popular topic in contemporary machine learning. To this end, various useful approaches have been proposed in the context of ℓ1-penalized estimation in the Gaussian framework. Though many of these inverse covariance estimation approaches are demonstrably scalable and have leveraged recent advances in convex optimization, they still depend on...